alternating direction method
A New Alternating Direction Method for Linear Programming
The alternating direction method of multipliers (ADMM) applied to linear programming (LP) is known to converge linearly, but the known rate depends on the problem dimension, and the algorithm exhibits a slow and fluctuating ``tail convergence'' in practice. In this paper, we propose a new variable splitting method for LP and prove that our method has a convergence rate of $O(\|\mathbf{A}\|^2\log(1/\epsilon))$. The proof is based on simultaneously estimating the distance from a pair of primal-dual iterates to the optimal primal and dual solution sets by certain residuals. In practice, this yields a new first-order LP solver that can exploit both the sparsity and the specific structure of the matrix $\mathbf{A}$, and that achieves a significant speedup over the current fastest LP solvers on important problems such as basis pursuit, inverse covariance matrix estimation, L1 SVM and nonnegative matrix factorization.
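As context for the splitting discussed above, here is a minimal sketch of baseline ADMM applied to a standard-form LP, min c@x subject to A@x = b, x >= 0; the splitting x = z and all parameter choices are illustrative, not the new splitting proposed in the paper.

```python
import numpy as np

def admm_lp(c, A, b, rho=1.0, iters=500, tol=1e-8):
    """Baseline ADMM for min c@x s.t. A@x = b, x >= 0,
    via the splitting x = z with z kept in the nonnegative orthant."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # KKT matrix of the equality-constrained x-update; invert once.
    # (Assumes A has full row rank; fine for a small dense sketch.)
    K = np.block([[rho * np.eye(n), A.T], [A, np.zeros((m, m))]])
    K_inv = np.linalg.inv(K)
    for _ in range(iters):
        # x-update: argmin_x c@x + (rho/2)*||x - z + u||^2  s.t.  A@x = b
        rhs = np.concatenate([rho * (z - u) - c, b])
        x = (K_inv @ rhs)[:n]
        z_old = z
        z = np.maximum(x + u, 0.0)   # z-update: projection onto x >= 0
        u = u + x - z                # scaled dual update
        # stop when primal and dual residuals are both small
        if np.linalg.norm(x - z) < tol and rho * np.linalg.norm(z - z_old) < tol:
            break
    return z
```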
Trajectory of Alternating Direction Method of Multipliers and Adaptive Acceleration
The alternating direction method of multipliers (ADMM) is one of the most widely used first-order optimisation methods in the literature owing to its simplicity, flexibility and efficiency. Over the years, numerous efforts have been made to improve the performance of the method, such as the inertial technique. By studying the geometric properties of ADMM, we discuss the limitations of the current inertial accelerated ADMM and then present and analyse an adaptive acceleration scheme for the method. Numerical experiments on problems arising from image processing, statistics and machine learning demonstrate the advantages of the proposed acceleration approach.
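The inertial technique the abstract refers to is easy to state on a generic fixed-point iteration z_{k+1} = T(z_k), and ADMM can be written as one. The sketch below is a minimal illustration under that reading; the residual-based fallback is an illustrative safeguard, not the paper's adaptive scheme.

```python
import numpy as np

def inertial_fixed_point(T, z0, a=0.3, iters=1000, tol=1e-10):
    """Inertial acceleration of a fixed-point map T: extrapolate along the
    last step, then apply T. Falls back to a plain step if the fixed-point
    residual grows (illustrative safeguard only)."""
    z_prev = z0.copy()
    z = T(z0)
    res = np.linalg.norm(z - z_prev)
    for _ in range(iters):
        y = z + a * (z - z_prev)           # inertial extrapolation
        z_next = T(y)
        new_res = np.linalg.norm(z_next - y)
        if new_res > res:                   # residual grew: retry without inertia
            z_next = T(z)
            new_res = np.linalg.norm(z_next - z)
        z_prev, z, res = z, z_next, new_res
        if res < tol:
            break
    return z
```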
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The authors present an unnamed algorithm for recovering a degree-three tensor from a noisy version, where the noise is due to (a) slice misalignment and (b) sparse noise. The authors present a loss function for the model, which they optimize by an ADMM-inspired gradient descent heuristic. The authors compare their method on real and synthetic image data to different implementations of RASL and an algorithm from [14], which they call Li's work, and show that their algorithm has lower recovery error. The paper is clearly written, and the idea of performing alignment and denoising on multiple images at once seems to be novel, although this reviewer is not a full expert on tensor methods in image processing and cannot definitively settle the question of the originality of the application.
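The model the review describes (a stack of images corrupted by sparse noise) is close to robust PCA, so the standard ADMM for robust PCA on a mode unfolding of the tensor gives a concrete, if simplified, stand-in for the authors' ADMM-inspired heuristic; the alignment step is omitted and all names here are illustrative.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft-thresholding: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_admm(Y, lam=None, rho=1.0, iters=200):
    """ADMM for min ||L||_* + lam*||S||_1  s.t.  L + S = Y,
    applied to a mode-unfolded tensor (slices stacked as columns)."""
    m, n = Y.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L, S, U = np.zeros_like(Y), np.zeros_like(Y), np.zeros_like(Y)
    for _ in range(iters):
        L = svt(Y - S - U, 1.0 / rho)    # low-rank update
        S = soft(Y - L - U, lam / rho)   # sparse-noise update
        U = U + L + S - Y                # scaled dual update
    return L, S
```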
Optimal Distributed Multi-Robot Communication-Aware Trajectory Planning using Alternating Direction Method of Multipliers
Mikkelsen, Jeppe Heini, Galeazzi, Roberto, Fumagalli, Matteo
This paper presents a distributed, optimal, communication-aware trajectory planning algorithm for multi-robot systems. Building on prior work, it addresses the multi-robot communication-aware trajectory planning problem using a general optimisation framework that imposes linear constraints on changes in robot positions to ensure communication performance and collision avoidance. In this paper, the optimisation problem is solved in a distributed manner by separating the communication performance constraint through an economic approach: the current communication budget is distributed equally among the robots, and the robots are allowed to trade parts of their budgets with one another. The separated optimisation problem is then solved using the consensus alternating direction method of multipliers. The method is verified through simulation on an inspection task.
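Consensus ADMM, which the abstract names as the solver, can be sketched generically as follows; the quadratic per-robot objectives and all names are illustrative assumptions, and the communication-budget trading constraint is omitted.

```python
import numpy as np

def consensus_admm(local_steps, n, rho=1.0, iters=200):
    """Consensus ADMM: N agents agree on a shared variable z.
    local_steps[i](v) must return argmin_x f_i(x) + (rho/2)*||x - v||^2."""
    N = len(local_steps)
    X, U, z = np.zeros((N, n)), np.zeros((N, n)), np.zeros(n)
    for _ in range(iters):
        for i in range(N):
            X[i] = local_steps[i](z - U[i])   # local proximal step per agent
        z = (X + U).mean(axis=0)              # averaging (consensus) step
        U += X - z                            # per-agent scaled dual update
    return z

# Illustrative quadratic agents f_i(x) = 0.5*||x - a_i||^2, whose proximal
# step is a weighted average of the anchor a_i and the input point.
rho = 1.0
anchors = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
steps = [lambda v, a=a: (a + rho * v) / (1.0 + rho) for a in anchors]
print(consensus_admm(steps, n=2, rho=rho))  # -> approx. the mean of the anchors
```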